Optimality Conditions and a Smoothing Trust Region Newton Method for NonLipschitz Optimization

Authors

  • Xiaojun Chen
  • Lingfeng Niu
  • Ya-Xiang Yuan
Abstract

Regularized minimization problems with nonconvex, nonsmooth, possibly non-Lipschitz penalty functions have attracted considerable attention in recent years, owing to their wide applications in image restoration, signal reconstruction and variable selection. In this paper, we derive affine-scaled second order necessary and sufficient conditions for local minimizers of such minimization problems. Moreover, we propose a globally convergent smoothing trust region Newton method which can find a point satisfying the affine-scaled second order necessary optimality condition from any starting point. Numerical examples are given to demonstrate the effectiveness of the smoothing trust region Newton method.
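As an illustrative sketch only, not the authors' algorithm: a standard smoothing device for a non-Lipschitz penalty |t|^p with 0 < p < 1 replaces it by the differentiable approximation (t² + μ²)^(p/2), which recovers |t|^p as the smoothing parameter μ tends to zero. A minimal Python check of this idea (all function names here are hypothetical):

```python
def smoothed_penalty(t, p, mu):
    """Differentiable approximation of |t|**p for 0 < p < 1.

    (t^2 + mu^2)^(p/2) is smooth for mu > 0 and tends to |t|**p as mu -> 0.
    """
    return (t * t + mu * mu) ** (p / 2)

def smoothed_grad(t, p, mu):
    # d/dt (t^2 + mu^2)^(p/2) = p * t * (t^2 + mu^2)^(p/2 - 1)
    return p * t * (t * t + mu * mu) ** (p / 2 - 1)

# As mu shrinks, the smoothed penalty approaches the non-Lipschitz |t|**p,
# while staying differentiable at t = 0 for every fixed mu > 0.
for mu in (1.0, 1e-2, 1e-4, 1e-8):
    print(mu, smoothed_penalty(1.0, 0.5, mu))
```

A smoothing method of the kind the abstract describes would drive μ to zero across trust region Newton iterations applied to the smoothed objective; the trust region machinery itself is omitted from this sketch.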


Similar references

A limited memory adaptive trust-region approach for large-scale unconstrained optimization

This study concerns a trust-region-based method for solving unconstrained optimization problems. The approach takes advantage of the compact limited memory BFGS updating formula together with an appropriate adaptive radius strategy. In our approach, the adaptive technique lets us reduce the number of subproblem solves while exploiting the structure of limited memory quasi-Newt...


Superlinear Convergence of Affine-Scaling Interior-Point Newton Methods for Infinite-Dimensional Nonlinear Problems with Pointwise Bounds

We develop and analyze a superlinearly convergent affine-scaling interior-point Newton method for infinite-dimensional problems with pointwise bounds in Lp-space. The problem formulation is motivated by optimal control problems with Lp-controls and pointwise control constraints. The finite-dimensional convergence theory by Coleman and Li (SIAM J. Optim., 6 (1996), pp. 418–445) makes essential use ...


Newton-Type Methods for Non-Convex Optimization Under Inexact Hessian Information

We consider variants of trust-region and cubic regularization methods for nonconvex optimization, in which the Hessian matrix is approximated. Under mild conditions on the inexact Hessian, and using approximate solutions of the corresponding sub-problems, we provide iteration complexity bounds for achieving ε-approximate second-order optimality, which have been shown to be tight. Our Hessian approximation condi...


Local Convergence of Filter Methods for Equality Constrained Nonlinear Programming

In [10] we discuss general conditions to ensure global convergence of inexact restoration filter algorithms for nonlinear programming. In this paper we show how to avoid the Maratos effect by means of a second order correction. The algorithms are based on feasibility and optimality phases, which can be either independent or not. The optimality phase differs from the original one only when a ful...


A Trust Region Algorithm for Solving Nonlinear Equations (RESEARCH NOTE)

This paper presents a practical and efficient method to solve large-scale nonlinear equations. The global convergence of this new trust region algorithm is verified. The algorithm is then used to solve the nonlinear equations arising in an Expanded Lagrangian Function (ELF). Numerical results for the implementation of some large-scale problems indicate that the algorithm is efficient for these ...



Journal:
  • SIAM Journal on Optimization

Volume 23, Issue –

Pages –

Publication date: 2013